How do PDP models learn quasiregularity?
Abstract
Parallel distributed processing (PDP) models have had a profound impact on the study of cognition. One domain in which they have been particularly influential is learning quasiregularity, in which mastery requires both learning regularities that capture the majority of the structure in the input and learning exceptions that violate those regularities. How PDP models learn quasiregularity is still not well understood. Small- and large-scale analyses of a feedforward, 3-layer network were carried out to address 2 fundamental issues about network functioning: how the model can learn both regularities and exceptions without sacrificing generalizability, and the nature of the hidden representation that makes this learning possible. Results show that capacity-limited learning pressures the network to form componential representations, which ensures good generalizability. Small and highly local perturbations of this representational system allow exceptions to be learned while minimally disrupting generalizability. Theoretical and methodological implications of the findings are discussed.
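The kind of model the abstract describes can be illustrated in miniature. The sketch below is not the authors' simulation; the corpus, layer sizes, learning rate, and training regime are all assumptions for illustration. A feedforward 3-layer network is trained with backpropagation on a toy quasiregular mapping: five items follow a regular "copy" rule (output = input), while one exception item maps to the inverted pattern.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quasiregular corpus: 4-bit input patterns.
# Items 0-4 follow the regular rule (target = input);
# item 5 is the exception (target = inverted input).
X = np.array([
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],   # the exception item
], dtype=float)
Y = X.copy()
Y[5] = 1.0 - X[5]   # exception violates the copy rule

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Feedforward 3-layer network: 4 input -> 10 hidden -> 4 output units.
W1 = rng.normal(0.0, 0.5, (4, 10)); b1 = np.zeros(10)
W2 = rng.normal(0.0, 0.5, (10, 4)); b2 = np.zeros(4)

lr = 0.5
for epoch in range(10000):
    # Forward pass through hidden and output layers.
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)
    # Backward pass: gradients of squared error through the sigmoids.
    dO = (O - Y) * O * (1 - O)
    dH = (dO @ W2.T) * H * (1 - H)
    W2 -= lr * (H.T @ dO); b2 -= lr * dO.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

# After training, the network handles both the rule and the exception.
pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
print("all items learned:", bool((pred == Y).all()))
```

In this toy setting, the shared structure of the regular items supports componential hidden representations, while the single exception is accommodated by small adjustments to the same hidden units, which is the tension between rules and exceptions that the paper analyzes.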
Similar articles
How Do Medical Students Learn Professionalism During Clinical Education? A Qualitative Study of Faculty Members' and Interns' Experiences
Introduction: Influencing professional personality development and related behaviors is one of the most challenging and complicated issues in medical education. Medical students acquire their professional attitudes gradually during their education in clinical wards, which profoundly affects their future conduct. This study was performed in order to answer this core question: "Which experiences ...
Modeling a semantic PDP network as a probabilistic classifier
McClelland and Rogers [1, 2] and Kemp and Tenenbaum [3] present two seemingly different models of how people reason about the attributes that different entities have. McClelland and Rogers present a neural network which is trained to predict, given the name of an entity and a particular relation (such as “can” or “has”), an appropriate set of responses to that query. Kemp and Tenenbaum, on the ...
Using computational, parallel distributed processing networks to model rehabilitation in patients with acquired dyslexia: An initial investigation
Background: Traditional cognitive neuropsychological models are good at diagnosing deficits but are limited when it comes to studying recovery and rehabilitation. Parallel distributed processing (PDP) models have more potential in this regard as they are dynamic and can actually learn. However, to date very little work has been done in using PDP models to study recovery and rehabilitation. Aims...
Challenging the widespread assumption that connectionism and distributed representations go hand-in-hand.
One of the central claims associated with the parallel distributed processing approach popularized by D.E. Rumelhart, J.L. McClelland and the PDP Research Group is that knowledge is coded in a distributed fashion. Localist representations within this perspective are widely rejected. It is important to note, however, that connectionist networks can learn localist representations and many connect...
Journal: Psychological Review
Volume 120, Issue 4
Pages: -
Publication date: 2013